
    Parallel Minimum Cuts in Near-linear Work and Low Depth

    We present the first near-linear work and poly-logarithmic depth algorithm for computing a minimum cut in a graph, while previous parallel algorithms with poly-logarithmic depth required at least quadratic work in the number of vertices. In a graph with n vertices and m edges, our algorithm computes the correct result with high probability in O(m log^4 n) work and O(log^3 n) depth. This result is obtained by parallelizing a data structure that aggregates weights along paths in a tree and by exploiting the connection between minimum cuts and approximate maximum packings of spanning trees. In addition, our algorithm improves upon bounds on the number of cache misses incurred to compute a minimum cut.
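    The abstract only names the ingredients, but the tree-packing connection it refers to is the Karger-style one: with a suitable approximate maximum packing of spanning trees, some minimum cut crosses one of the trees in at most two edges, so it suffices to evaluate cuts that respect each tree. As a minimal illustration (our own brute-force, sequential sketch, not the paper's near-linear parallel method), the code below evaluates every cut obtained by removing a single tree edge of one given spanning tree.

```python
def one_respecting_cuts(n, edges, tree_parent, root=0):
    """Minimum over all cuts induced by removing one tree edge.
    edges: list of (a, b, w); tree_parent[v]: parent of v in the spanning tree."""
    children = [[] for _ in range(n)]
    for v in range(n):
        if v != root:
            children[tree_parent[v]].append(v)

    # Euler-tour intervals so "x lies in the subtree of u" is an interval test.
    tin, tout, timer = [0] * n, [0] * n, 0
    stack = [(root, False)]
    while stack:
        v, done = stack.pop()
        if done:
            tout[v] = timer
            continue
        tin[v] = timer
        timer += 1
        stack.append((v, True))
        for c in children[v]:
            stack.append((c, False))

    def in_subtree(x, u):
        return tin[u] <= tin[x] < tout[u]

    best = float("inf")
    for u in range(n):
        if u == root:
            continue
        # Weight of graph edges with exactly one endpoint inside subtree(u).
        cut = sum(w for a, b, w in edges if in_subtree(a, u) != in_subtree(b, u))
        best = min(best, cut)
    return best

# Example: a 4-cycle with unit weights and the path spanning tree 0-1-2-3.
# one_respecting_cuts(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)],
#                     tree_parent=[0, 0, 1, 2])  ->  2
```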

    Sorting by Swaps with Noisy Comparisons

    We study sorting of permutations by random swaps when each comparison gives the wrong result with some fixed probability p < 1/2. We use this process as a prototype for the behaviour of randomized, comparison-based optimization heuristics in the presence of noisy comparisons. As quality measure, we compute the expected fitness of the stationary distribution. To measure the runtime, we compute the minimal number of steps after which the average fitness approximates the expected fitness of the stationary distribution. We study the process where in each round a random pair of elements at distance at most r is compared. We give theoretical results for the extreme cases r = 1 and r = n, and experimental results for the intermediate cases. We find a trade-off between faster convergence (for large r) and better quality of the solution after convergence (for small r). An extended abstract of this paper was presented at the Genetic and Evolutionary Computation Conference (GECCO 2017).
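    The process is easy to simulate. The sketch below is our own reading of the model (distance is taken as positional distance, and total dislocation stands in for the fitness function): in each round a random pair of positions at distance at most r is compared with error probability p and swapped if the possibly wrong answer says they are out of order.

```python
import random

def noisy_swap_sort(n, r, p, rounds, seed=0):
    """Run the random-swap process with noisy comparisons and return the
    total dislocation of the final sequence (our choice of fitness measure)."""
    rng = random.Random(seed)
    seq = list(range(n))
    rng.shuffle(seq)
    for _ in range(rounds):
        i = rng.randrange(n)
        j = rng.randrange(max(0, i - r), min(n, i + r + 1))
        if i == j:
            continue
        if i > j:
            i, j = j, i
        out_of_order = seq[i] > seq[j]
        if rng.random() < p:              # noisy comparison: flip the answer
            out_of_order = not out_of_order
        if out_of_order:
            seq[i], seq[j] = seq[j], seq[i]
    return sum(abs(pos - val) for pos, val in enumerate(seq))

# e.g. compare adjacent-only vs. arbitrary swaps:
# noisy_swap_sort(200, 1, 0.1, 200_000) vs. noisy_swap_sort(200, 200, 0.1, 200_000)
```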

    Optimal Dislocation with Persistent Errors in Subquadratic Time

    We study the problem of sorting N elements in the presence of persistent errors in comparisons: in this classical model, each comparison between two elements is wrong independently with some probability p, but repeating the same comparison always gives the same result. The best known algorithms for this problem have running time O(N^2) and achieve an optimal maximum dislocation of O(log N) for constant error probability. Note that no algorithm can achieve dislocation o(log N), regardless of its running time. In this work we present the first subquadratic-time algorithm with optimal maximum dislocation: our algorithm runs in Õ(N^{3/2}) time and guarantees O(log N) maximum dislocation with high probability. Though the first version of our algorithm is randomized, it can be derandomized by extracting the necessary random bits from the results of the comparisons (errors).
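    The persistent-error model can be captured in a few lines. The construction below is a minimal sketch of ours (element identities stand in for their true ranks) that shows why repetition does not help: the error for each pair is drawn once and then frozen.

```python
import random

def persistent_comparator(p, seed=0):
    """Comparisons err with probability p, but errors are persistent:
    asking about the same pair again returns the same (possibly wrong) answer."""
    rng = random.Random(seed)
    flipped = {}                               # unordered pair -> is this pair flipped?

    def less(x, y):                            # compare true ranks x and y
        if x == y:
            return False
        key = (min(x, y), max(x, y))
        if key not in flipped:
            flipped[key] = rng.random() < p    # error drawn once, then fixed forever
        return (x < y) != flipped[key]
    return less

# less = persistent_comparator(p=0.1)
# less(3, 7) == less(3, 7)   # always True: repeating a comparison never helps
```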

    Sorting with Recurrent Comparison Errors

    We present a sorting algorithm for the case of recurrent random comparison errors. The algorithm essentially achieves, simultaneously, the good properties of previous algorithms for sorting n distinct elements in this model. In particular, it runs in O(n^2) time, the maximum dislocation of the elements in the output is O(log n), and the total dislocation is O(n). These guarantees are the best possible, since we prove that even randomized algorithms cannot achieve o(log n) maximum dislocation with high probability, or o(n) total dislocation in expectation, regardless of their running time.
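    For reference, the two quality measures used in these abstracts, spelled out as a small helper (the helper and its names are ours): the dislocation of an element is the absolute difference between its output position and its true rank, and the guarantees above are O(log n) for the maximum and O(n) for the sum.

```python
def dislocations(output, true_rank):
    """Return (maximum, total) dislocation of an output permutation.
    output: elements in the order produced by the sorter;
    true_rank: maps each element to its rank 0..n-1 in the correct order."""
    per_element = [abs(pos - true_rank[x]) for pos, x in enumerate(output)]
    return max(per_element), sum(per_element)

# dislocations(["b", "a", "c"], {"a": 0, "b": 1, "c": 2})  ->  (1, 2)
```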

    Longest Increasing Subsequence under Persistent Comparison Errors

    We study the problem of computing a longest increasing subsequence in a sequence S of n distinct elements in the presence of persistent comparison errors. In this model, every comparison between two elements can return the wrong result with some fixed (small) probability p, and comparisons cannot be repeated. Computing the longest increasing subsequence exactly is impossible in this model; therefore, the objective is to identify a subsequence that (i) is indeed increasing and (ii) has a length that approximates the length of the longest increasing subsequence. We present asymptotically tight upper and lower bounds on both the approximation factor and the running time. In particular, we present an algorithm that computes an O(log n)-approximation in time O(n log n), with high probability. This approximation relies on the fact that we can approximately sort n elements in O(n log n) time such that the maximum dislocation of an element is at most O(log n). For the lower bounds, we prove that (i) there is a set of sequences such that, on a sequence picked randomly from this set, every algorithm must return an Ω(log n)-approximation with high probability, and (ii) any O(log n)-approximation algorithm for longest increasing subsequence requires Ω(n log n) comparisons, even in the absence of errors.
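    For context on lower bound (ii): with error-free comparisons, the length of a longest increasing subsequence can be computed with O(n log n) comparisons by patience sorting, as in the standard sketch below (ours). The algorithm above matches that running time while, by lower bound (i), necessarily settling for an O(log n)-approximation of the length.

```python
import bisect

def lis_length(seq):
    """Length of a longest increasing subsequence of distinct elements,
    via patience sorting: tails[k] is the smallest possible tail of an
    increasing subsequence of length k + 1 seen so far."""
    tails = []
    for x in seq:
        k = bisect.bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(tails)

# lis_length([3, 1, 2, 5, 4])  ->  3   (e.g. 1, 2, 4)
```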

    Paracrine interactions between primary human macrophages and human fibroblasts enhance murine mammary gland humanization in vivo

    Introduction: Macrophages comprise an essential component of the mammary microenvironment necessary for normal gland development. However, there is no viable in vivo model to study their role in normal human breast function. We hypothesized that adding primary human macrophages to the murine mammary gland would enhance humanization and provide a novel approach to examine immune-stromal cell interactions during the humanization process.
    Methods: Primary human macrophages, in the presence or absence of ectopic estrogen stimulation, were used to humanize mouse mammary glands. Mechanisms of enhanced humanization were identified by cytokine/chemokine ELISAs, zymography, western analysis, and invasion and proliferation assays; results were confirmed with immunohistological analysis.
    Results: The combined treatment of macrophages and estrogen stimulation significantly enhanced the percentage of the total gland humanized and the engraftment/outgrowth success rate. Time-course analysis revealed the disappearance of the human macrophages by two weeks post-injection, suggesting that the improved overall growth and invasiveness of the fibroblasts provided a larger stromal bed for epithelial cell proliferation and structure formation. Confirming their promotion of fibroblast humanization, estrogen-stimulated macrophages significantly enhanced fibroblast proliferation and invasion in vitro and significantly increased proliferating cell nuclear antigen (PCNA)-positive cells in humanized glands. Cytokine/chemokine ELISAs, zymography and western analyses identified TNFα and MMP9 as potential mechanisms by which estrogen-stimulated macrophages enhanced humanization. Specific inhibitors of TNFα and MMP9 validated the effects of these molecules on fibroblast behavior in vitro, as did immunohistochemical analysis of humanized glands for human-specific MMP9 expression. Lastly, glands humanized with macrophages had enhanced engraftment and tumor growth compared to glands humanized with fibroblasts alone.
    Conclusions: Herein, we demonstrate intricate immune and stromal cell paracrine interactions in a humanized in vivo model system. We confirmed our in vivo results with in vitro analyses, highlighting the value of this model to interchangeably substantiate in vitro and in vivo results. It is critical to understand the signaling networks that drive paracrine cell interactions, for tumor cells exploit these signaling mechanisms to support their growth and invasive properties. This report presents a dynamic in vivo model to study primary human immune/fibroblast/epithelial interactions and to advance our knowledge of the stromal-derived signals that promote tumorigenesis.

    From Sorting to Optimization: Coping with Error-Prone Comparisons

    Coping with errors in computation has a long tradition in computer science. Errors can represent hardware faults, imprecise measurements, or mistakes in judgments made by humans, or they may even be introduced deliberately in order to save certain resources. In this thesis we consider a model in which comparisons can fail. In particular, we assume that such comparisons allow us to query the linear order of two given elements but are prone to making errors: with some probability 1−p, the result of the comparison is correct, while with some small probability p, we get back the reverse order of the two queried elements. We distinguish between persistent and non-persistent comparison errors: the former means that repeating the same comparison several times always yields the same result, whereas the latter means that every repetition of a comparison is again correct or wrong with the same probabilities, independently of previous outcomes. In this regard, we are especially interested in algorithms that only use ordinal information about the elements to compute a solution to a given problem. Prominent examples are combinatorial problems such as comparison-based sorting, one focus of this thesis, but there are also many other optimization problems whose optimum solution, or an approximation thereof, can be computed without knowing the exact values of the elements.
    We begin this thesis by considering the problem of sorting in the presence of non-persistent comparison errors. Starting with an input sequence of n distinct elements, we examine two natural sorting strategies that repeatedly compare and swap two elements within the sequence: in the first, we restrict ourselves to pairs of elements whose positions in the current sequence are adjacent, while in the second, we allow comparing and swapping two arbitrary elements. The sorting strategies can be viewed as Markov chains, and we show that restricting to adjacent swaps yields a better "sortedness" of a sequence in the stationary distribution than allowing arbitrary swaps, namely O(n) vs. O(n^2) total dislocation in expectation. (The dislocation of an element is the absolute difference between its position and its rank. The total dislocation of a sequence is the sum of the dislocations of all elements.)
    With regard to persistent comparison errors, we solve the problem of sorting n elements optimally in terms of both running time and sortedness. In particular, we present an algorithm that in O(n log n) time returns a permutation of the elements achieving both O(log n) maximum dislocation of any element and O(n) total dislocation with high probability. Additionally, we show that this is the best we can hope for in this error model, as we provide tight lower bounds on the two dislocation measures.
    The second combinatorial problem that we consider is computing, in the presence of persistent comparison errors, a longest increasing subsequence of a given input sequence containing n distinct elements. Here we present an O(log n)-approximation algorithm that returns in O(n log n) time a subsequence of the input that is guaranteed to be increasing and whose length is at least Ω(1/log n) times the length of the correct answer. Moreover, we provide lower bounds showing that both the running time and the approximation factor are asymptotically tight.
    We then turn to more general optimization problems and allow algorithms to perform a few comparisons that are always correct, but obviously much more costly than their error-prone counterparts. The intention behind these "always-correct" comparisons is to guarantee that, despite errors in the other comparisons, an approximation to the optimum solution of such a problem can still be found. We therefore extend our model of computation to have two types of comparisons, always-correct and error-prone, where for the latter we assume that the errors are persistent. In this model, we propose to study a natural complexity measure that accounts for the number of comparisons of each type separately. For a large class of optimization problems, we finally show that a constant-factor approximation can be computed using only a polylogarithmic number of always-correct comparisons and O(n log n) error-prone comparisons. In particular, the result applies to k-extendible systems, which include matroids as a special case and cover several NP-hard problems.
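    The two-oracle model from the last paragraph can be framed as follows; this is our own minimal sketch (class and method names are ours, true ranks stand in for element values), not the thesis's implementation. An algorithm mixes cheap error-prone comparisons, whose persistent errors occur with probability p, with expensive always-correct ones, and its complexity is the pair of call counts. In the result quoted above, a constant-factor approximation needs only polylogarithmically many exact_less calls next to O(n log n) noisy_less calls.

```python
import random

class TwoOracleComparator:
    """Two comparison oracles over true ranks, with separate call counters:
    noisy_less is cheap but suffers persistent errors with probability p,
    exact_less is always correct but counted (and charged) separately."""

    def __init__(self, p, seed=0):
        self.rng = random.Random(seed)
        self.p = p
        self.flipped = {}          # persistent error drawn once per unordered pair
        self.noisy_calls = 0
        self.exact_calls = 0

    def noisy_less(self, x, y):    # error-prone, persistent
        self.noisy_calls += 1
        key = (min(x, y), max(x, y))
        if key not in self.flipped:
            self.flipped[key] = self.rng.random() < self.p
        return (x < y) != self.flipped[key]

    def exact_less(self, x, y):    # always correct, counted separately
        self.exact_calls += 1
        return x < y
```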

    Sort well with energy-constrained comparisons

    We study very simple sorting algorithms based on a probabilistic comparator model. In our model, errors in comparing two elements are due to (1) the energy or effort put into the comparison and (2) the difference between the compared elements. Such algorithms keep comparing pairs of randomly chosen elements, and they correspond to Markovian processes. The study of these Markov chains reveals an interesting phenomenon: in several cases, the algorithm that repeatedly compares only adjacent elements is better than the one making arbitrary comparisons, in that, in the long run, the former produces sequences that are "better sorted". The analysis of the underlying Markov chain poses new and interesting questions, as the latter algorithm yields a non-reversible chain whose stationary distribution therefore seems difficult to calculate explicitly.
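    The abstract describes the comparator only qualitatively: errors become less likely the more energy is invested and the further apart the compared elements are. The sketch below fixes one plausible error law purely for illustration; the exponential form and the parameter name energy are our assumptions, not the paper's model.

```python
import math
import random

def energy_comparator(energy, seed=0):
    """Error-prone comparator over true ranks whose reliability grows with the
    energy spent and with the gap between the compared elements.
    Assumed error law (illustration only): P[error] = exp(-energy * |x - y|)."""
    rng = random.Random(seed)

    def less(x, y):
        truth = x < y
        if rng.random() < math.exp(-energy * abs(x - y)):
            return not truth           # comparison fails
        return truth
    return less

# Such a comparator, plugged into the random-swap processes above, errs mostly
# between near-equal elements and rarely between elements that are far apart.
```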